draft

ASCRIBING MENTAL QUALITIES TO MACHINES
Abstract: Ascribing mental qualities like beliefs, intentions and wants to machines is correct and useful if done conservatively. We propose some new definitional tools for this: second order structural definitions and definitions relative to an approximate theory.

(this draft of MENTAL[S76,JMC]@SU-AI compiled at 16:41 on December 3, 1976)
INTRODUCTION
To ascribe certain beliefs, knowledge, free will, intentions, consciousness, abilities or wants to a machine or computer program is legitimate when such an ascription expresses the same information about the machine that it expresses about a person. It is useful when the ascription helps us understand the structure of the machine, its past or future behavior, or how to repair or improve it. It is perhaps never logically required even for humans, but a practical theory of the behavior of machines or humans may require mental qualities or qualities isomorphic to them. Theories of belief, knowledge and wanting can be constructed for machines in a simpler setting than for humans and later applied to humans. Ascription of mental qualities is most straightforward for machines of known structure such as thermostats and computer operating systems, but is most useful when applied to entities whose structure is very incompletely known.
The above views are motivated by work in artificial intelligence[1] (abbreviated AI). They can be taken as asserting that many of the philosophical problems of mind take a practical form as soon as one takes seriously the idea of making machines behave intelligently. In particular, AI raises for machines two issues that have heretofore been considered only in connection with people.
First, in designing intelligent programs and looking at them from the outside we need to determine the conditions under which specific mental and volitional terms are applicable. We can exemplify these problems by asking when might it be legitimate to say about a machine, "It knows I want a reservation to Boston, and it can give it to me, but it won't".
Second, when we want a generally intelligent[2] computer program, we must build into it a general view of what the world is like with especial attention to facts about how the information required to solve problems is to be obtained and used. Thus we must provide it with some kind of metaphysics (general world-view) and epistemology (theory of knowledge) however naive.
As much as possible, we will ascribe mental qualities separately from each other instead of bundling them in a concept of mind. This is necessary, because present machines have rather varied little minds; the mental qualities that can legitimately be ascribed to them are few and differ from machine to machine. We will not even try to meet objections like, "Unless it also does X, it is illegitimate to speak of its having mental qualities."
Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance. However, the machines mankind has so far found it useful to construct rarely have beliefs about beliefs. (Beliefs about beliefs will be needed by computer programs to reason about what knowledge they lack and where to get it). Mental qualities peculiar to human-like motivational structures, such as love and hate, will not be required for intelligent behavior, but we could probably program computers to exhibit them if we wanted to, because our common sense notions about them translate readily into certain program and data structures. Still other mental qualities, e.g. humor and appreciation of beauty, seem much harder to model. While we will be quite liberal in ascribing some mental qualities even to rather primitive machines, we will try to be conservative in our criteria for ascribing any particular quality.
The successive sections of this paper will give philosophical and AI reasons for ascribing beliefs to machines, two new forms of definition that seem necessary for defining mental qualities and examples of their use, examples of systems to which mental qualities are ascribed, some first attempts at defining a variety of mental qualities, some criticisms of other views on mental qualities, notes, and references.
WHY ASCRIBE MENTAL QUALITIES?
Why should we want to ascribe beliefs to machines at all? This is the opposite question to that of reductionism. Instead of asking how mental qualities can be reduced to physical ones, we ask how to ascribe mental qualities to physical systems. This question may be more straightforward and may lead to better answers to the questions of reductionism.
To put the issue sharply, consider a computer program for which we possess complete listings. The behavior of the program in any environment is determined from the structure of the program and can be found out by simulating the action of the program and the environment without having to deal with any concept of belief. Nevertheless, there are several reasons for ascribing belief and other mental qualities:
1. Although we may know the program, its state at a given moment is usually not directly observable, and the facts we can obtain about its current state may be more readily expressed by ascribing certain beliefs or wants than in any other way.
2. Even if we can simulate the interaction of our program with its environment using another more comprehensive program, the simulation may be a billion times too slow. We also may not have the initial conditions of the environment or the environment's laws of motion in a suitable form, whereas it may be feasible to make a prediction of the effects of the beliefs we ascribe to the program without any computer at all.
3. Ascribing beliefs may allow deriving general statements about the program's behavior that could not be obtained from any finite number of simulations.
4. The belief and goal structures we ascribe to the program may be easier to understand than the details of the program as expressed in its listing.
5. The belief and goal structure is likely to be close to the structure the designer of the program had in mind, and it may be easier to debug the program in terms of this structure than directly from the listing. In fact, it is often possible for someone to correct a fault by reasoning in general terms about the information in a program or machine, diagnosing what is wrong as a false belief, and looking at the details of the program or machine only sufficiently to determine how the false belief is represented and what mechanism caused it to arise.[3]
All the above reasons for ascribing beliefs are epistemological, i.e. ascribing beliefs is needed to adapt to limitations on our ability to acquire knowledge, use it for prediction, and establish generalizations in terms of the elementary structure of the program. Perhaps this is the general reason for ascribing higher levels of organization to systems.
Computers give rise to numerous examples of building a higher structure on the basis of a lower and conducting subsequent analyses using the higher structure. The geometry of the electric fields in a transistor and its chemical composition give rise to its properties as an electric circuit element. Transistors are combined in small circuits and powered in standard ways to make logical elements such as ANDs, ORs, NOTs and flip-flops. Computers are designed with these logical elements to obey a desired order code; the designer usually needn't consider the properties of the transistors as circuit elements. The designer of a higher level language works with the order code and doesn't have to know about the ANDs and ORs; the user of the higher order language needn't know the computer's order code.
In the above cases, users of the higher level can completely ignore the lower level, because the behavior of the higher level system is completely determined by the values of the higher level variables; e.g. in order to determine the outcome of a computer program, one needn't consider the flip-flops. However, when we ascribe mental structure to humans or goals to society, we always get highly incomplete systems; the higher level behavior cannot be fully predicted from higher level observations and higher level "laws" even when the underlying lower level behavior is determinate.
Besides the above philosophical reasons for ascribing mental qualities to machines, I shall argue that in order to make machines behave intelligently, we will have to program them to ascribe beliefs etc. to each other and to people.
→→→→ Here there will be more on machines' models of each other's minds. ←←←←
TWO METHODS OF DEFINITION AND THEIR APPLICATION TO MENTAL QUALITIES
In our opinion, a major source of problems in defining mental and other philosophical concepts is the weakness of the methods of definition that have been explicitly used. Therefore we introduce two new[4] kinds of definition: second order structural definition and definition relative to an approximate theory, and their application to defining mental qualities.
1. Second Order Structural Definition.
Structural definitions of qualities are given in terms of the state of the system being described while behavioral definitions are given in terms of its actual or potential behavior[5].
If the structure of the machine is known, one can give an ad hoc first order structural definition. This is a predicate B(s,p) where s represents a state of the machine and p represents a sentence in a suitable language, and B(s,p) is the assertion that when the machine is in state s, it believes the sentence p. (The considerations of this paper are neutral in deciding whether to regard the object of belief as a sentence or to use a modal operator or to admit propositions as abstract objects that can be believed. The paper is written as though sentences are the objects of belief, but in (McCarthy 1976) I favor propositions). A general first order structural definition of belief would be a predicate B(W,M,s,p) where W is the "world" in which the machine M whose beliefs are in question is situated. I do not see how to give such a definition of belief, and I think it is impossible. Therefore we turn to second order definitions.
A second order structural definition of belief is a second order predicate β(W,M,B). β(W,M,B) asserts that the first order predicate B is a "good" notion of belief for the machine M in the world W. Here "good" means that the beliefs that B ascribes to M agree with our ideas of what beliefs M would have, not that the beliefs themselves are true. The axiomatizations of belief in the literature are partial second order definitions.
In general, a second order definition gives criteria for criticizing an ascription of a quality to a system. We suggest that both our common sense and scientific usage of not-directly-observable qualities corresponds more closely to second order structural definition than to any kind of behavioral definition. Note that a second order definition cannot guarantee that there exist predicates B meeting the criterion β or that such a B is unique. Some qualities are best defined jointly with related qualities, e.g. beliefs and goals may require joint treatment.
Second order definitions criticize whole belief structures rather than individual beliefs. We can treat individual beliefs by saying that a system believes p in state s provided all "reasonably good" B's satisfy B(s,p). Thus we are distinguishing the "intersection" of the reasonably good B's.
(An analogy with cryptography may be helpful. We solve a cryptogram by making hypotheses about the structure of the cipher and about the translation of parts of the cipher text. Our solution is complete when we have "guessed" a cipher system that produces the cryptogram from a plausible plaintext message. Though we never prove that our solution is unique, two different solutions are almost never found except for very short cryptograms. In the analogy, the second order definition β corresponds to the general idea of encipherment, and B is the particular system used. While we will rarely be able to prove uniqueness, we don't expect to find two B's both satisfying β).
It seems to me that there should be a metatheorem of mathematical logic asserting that not all second order definitions can be reduced to first order definitions and further theorems characterizing those second order definitions that admit such reductions. Such technical results, if they can be found, may be helpful in philosophy and in the construction of formal scientific theories. I would conjecture that many of the informal philosophical arguments that certain mental concepts cannot be reduced to physics will turn out to be sketches of arguments that these concepts require second (or higher) order definitions.
Here is an approximate second order definition of belief. For each state s of the machine and each sentence p in a suitable language L, we assign truth to B(s,p) if and only if the machine is considered to believe p when it is in state s. The language L is chosen for our convenience, and there is no assumption that the machine explicitly represents sentences of L in any way. Thus we can talk about the beliefs of Chinese, dogs, corporations, thermostats, and computer operating systems without assuming that they use English or our favorite first order language. L may or may not be the language we are using for making other assertions, e.g. we could, writing in English, systematically use French sentences as objects of belief. However, the best choice for artificial intelligence work may be to make L a subset of our "outer" language restricted so as to avoid the paradoxical self-references of (Montague 1963).
We now subject B(s,p) to certain criteria; i.e. β(B,W) is considered true provided the following conditions are satisfied:
1.1. The set Bel(s) of beliefs, i.e. the set of p's for which B(s,p) is assigned true when M is in state s, contains sufficiently "obvious" consequences of some of its members.
1.2. Bel(s) changes in a reasonable way when the state changes in time. We like new beliefs to be logical or "plausible" consequences of old ones or to come in as communications in some language on the input lines or to be observations, i.e. beliefs about the environment the information for which comes in on the input lines. The set of beliefs should not change too rapidly as the state changes with time.
1.3. We prefer the set of beliefs to be as consistent as possible. (Admittedly, consistency is not a quantitative concept in mathematical logic - a system is either consistent or not, but it would seem that we will sometimes have to ascribe inconsistent sets of beliefs to machines and people. Our intuition says that we should be able to maintain areas of consistency in our beliefs and that it may be especially important to avoid inconsistencies in the machine's purely analytic beliefs).
1.4. Our criteria for belief systems can be strengthened if we identify some of the machine's beliefs as expressing goals, i.e. if we have beliefs of the form "It would be good if ...". Then we can ask that the machine's behavior be somewhat rational, i.e. it does what it believes will achieve its goals. The more of its behavior we can account for in this way, the better we will like the function B(s,p). We also would like to regard internal state changes as changes in belief in so far as this is reasonable.
1.5. If the machine communicates, i.e. emits sentences in some language that can be interpreted as assertions, questions and commands, we will want the assertions to be among its beliefs unless we are ascribing to it a goal or subgoal that involves lying. We will be most satisfied with our belief ascription if we can account for its communications as furthering the goals we are ascribing.
1.6. Sometimes we shall want to ascribe introspective beliefs, e.g. a belief that it does not know how to fly to Boston or even that it doesn't know what it wants in a certain situation.
1.7. Finally, we will prefer a more economical ascription B to a less economical one. The fewer beliefs we ascribe and the less they change with state, consistent with accounting for the behavior and the internal state changes, the better we will like it.
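As an illustration only, the following toy sketch shows how a couple of these criteria might be checked mechanically for a candidate ascription; the notion of "obvious consequence" and the scoring are invented simplifications covering only 1.1 and 1.7.

    # Sketch: mechanical checks suggested by criteria 1.1 (closure under obvious
    # consequences) and 1.7 (economy).  Everything concrete here is an assumption.

    def satisfies_1_1(bel, obvious_consequences):
        """Bel(s) should contain the obvious consequences of its members."""
        return all(q in bel for p in bel for q in obvious_consequences(p))

    def economy_score(bel_by_state):
        """Criterion 1.7: fewer ascribed beliefs and fewer changes with state is better."""
        sizes = sum(len(b) for b in bel_by_state.values())
        states = list(bel_by_state.values())
        changes = sum(len(a ^ b) for a, b in zip(states, states[1:]))
        return sizes + changes        # lower is better

    # Toy use: beliefs are strings; "too cold" obviously implies "not OK".
    obvious = lambda p: {"not OK"} if p in ("too cold", "too hot") else set()
    bel_by_state = {0: {"too cold", "not OK"}, 1: {"OK"}}
    print(satisfies_1_1(bel_by_state[0], obvious), economy_score(bel_by_state))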
The above criteria have been formulated somewhat vaguely. This would be bad if there were widely different ascriptions of beliefs to a particular machine that all met our criteria or if the criteria allowed ascriptions that differed widely from our intuitions. My present opinion is that more thought will make the criteria somewhat more precise at no cost in applicability, but that they should still remain rather vague, i.e. we shall want to ascribe belief in a family of cases. However, even at the present level of vagueness, there probably won't be radically different equally "good" ascriptions of belief for systems of practical interest. If there were, we would notice unresolvable ambiguities in our ascriptions of belief to our acquaintances.
While we may not want to pin down our general idea of belief to a single axiomatization, we will need to build precise axiomatizations of belief and other mental qualities into particular intelligent computer programs.
2. Definitions relative to an approximate theory.
Certain concepts, e.g. X can do Y, are meaningful as statements in rather complex theories. For example, suppose we denote the state of the world by s, and suppose we have functions f1(s),...,fn(s) that are directly or indirectly observable. Suppose further that F(s) is another function of the world-state but that we can approximate it by

    F"(s) = F'(f1(s),...,fn(s)).
Now consider the counterfactual conditional sentence, "If f2(s) were 4, then F(s) would be 3 - calling the present state of the world s0." By itself, this sentence has no meaning, because no definite state s of the world is specified by the condition. However, in the framework of the functions f1(s),...,fn(s) and the given approximation to F(s), the assertion can be verified by computing F' with all arguments except the second having the values associated with the state s0 of the world.
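A minimal sketch of this verification, with made-up observables: the counterfactual is evaluated entirely inside the approximate theory by replacing the second coordinate and re-applying F'.

    # Sketch: evaluating "if f2(s) were 4, then F(s) would be 3" inside the
    # approximate theory F"(s) = F'(f1(s),...,fn(s)).  The observables and F'
    # below are invented stand-ins.

    def counterfactual_value(F_prime, observed, index, new_value):
        """Re-evaluate F' with one observable coordinate replaced."""
        args = list(observed)
        args[index] = new_value
        return F_prime(*args)

    F_prime = lambda f1, f2, f3: f1 + f2 - f3     # hypothetical approximation
    observed_s0 = (2, 7, 3)                       # f1(s0), f2(s0), f3(s0)
    print(counterfactual_value(F_prime, observed_s0, 1, 4))   # prints 3, as in the sentence above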
This example gives rise to some remarks:
2.1. The most straightforward case of counterfactuals arises when the state of a phenomenon has a distinguished Cartesian product structure. Then the meaning of a change of one component without changing the others is quite clear. Changes of more than one component also have definite meanings. This is a stronger structure than the possible worlds structure discussed in (Lewis 1973).
2.2. The usual case is one in which the state s is a substantially unknown entity and the form of the function F is also unknown, but the values of f1(s),...,fn(s) and the function F' are much better known. Suppose further that F"(s) is known to be only a fair approximation to F(s). We now have a situation in which the counterfactual conditional statement is meaningful as long as it is not examined too closely, i.e. as long as we are thinking of the world in terms of the values of f1,...,fn, but when we go beyond the approximate theory, the whole meaning of the sentence seems to disintegrate.
Our idea is that this is a very common phenomenon. In particular it applies to statements of the form "X can do Y". Such statements can be given a precise meaning in terms of a system of interacting automata as is discussed in detail in (McCarthy and Hayes 1970). We determine whether Automaton 1 can put Automaton 3 in state 5 at time 10 by answering a question about an automaton system in which the outputs from Automaton 1 are replaced by inputs from outside the system. Namely, we ask whether there is a sequence of inputs to the new system that would put Automaton 3 in state 5 at time 10; if yes, we say that Automaton 1 could do it in the original system even though we may be able to show that it won't emit the necessary outputs. In that paper, we argue that this definition corresponds to the intuitive notion of X can do Y.
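A minimal sketch in the spirit of that definition: Automaton 1's outputs are replaced by free external inputs, and "can" becomes a search over input sequences. The tiny system below (its state space, inputs and deadline) is invented for illustration.

    # Sketch: X "can" do Y if some sequence of external inputs (standing in for
    # X's former outputs) drives the rest of the system into the target state by
    # the deadline.  Brute-force search over a tiny invented system.
    from itertools import product

    def can(step, initial, inputs, target_test, deadline):
        """Is there an input sequence of length `deadline` after which target_test holds?"""
        for seq in product(inputs, repeat=deadline):
            state = initial
            for x in seq:
                state = step(state, x)
            if target_test(state):
                return True
        return False

    # Hypothetical system: the other automaton's state is a counter that the
    # external input may increment, capped at 5.
    step = lambda state, x: min(state + x, 5)
    print(can(step, initial=0, inputs=(0, 1), target_test=lambda s: s == 5, deadline=10))   # True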
What was not noted in that paper is that modelling the situation by the particular system of interacting automata is an approximation, and the sentences involving can derived from the approximation cannot necessarily be translated into single assertions about the real world.
I contend that the statement, "I can go skiing tomorrow, but I don't intend to, because I want to finish this paper" has the following properties:
1. It has a precise meaning in a certain approximate theory of the world in which I and my environment are considered as collections of interacting automata.
2. It cannot be directly interpreted as a statement about the world itself, because it can't be stated in what total configurations of the world the success of my attempt to go skiing is to be validated.
3. The approximate theory within which the statement is meaningful may have an objectively preferred status in that it may be the only theory not enormously more complex that enables my actions and mental states to be predicted.
4. The statement may convey useful information.
Our conclusion is that the statement is true, but in a sense that depends essentially on the approximate theory, and that this intellectual situation is normal and should be accepted. We further conjecture that the old-fashioned common-sense analysis of a personality into will and intellect and other components may be valid and might be put on a precise scientific footing using definitions relative to approximate theories.
If, as we conjecture, most common sense and even scientific terms are meaningful only in approximate theories, then a philosophical method as old as Socrates needs to be re-examined. This method involves attacking a common sense notion by introducing examples that have not been encountered in the previous usage of the notion and showing that in these cases the notion gives unacceptable results. But it may be that any theory of just or unjust actions must be based on an approximate model of the world, and anomalies can always be found. The philosophical analysis cannot invalidate a notion by finding limits on its applicability; that requires finding a better and more general notion. This may not always be possible, and even when found, the new notion will still be limited. For example, there may never be a notion of just actions securely founded on quantum physics and chemistry. Moreover, this conclusion does not depend on any considerations of emergent phenomena.
EXAMPLES OF SYSTEMS WITH MENTAL QUALITIES
Let us consider some examples of machines and programs to which we may ascribe belief and goal structures.
1. Thermostats. Ascribing beliefs to simple thermostats is unnecessary for the study of thermostats, because their operation can be well understood without it. However, their very simplicity makes it clearer what is involved in the ascription, and we maintain (partly as a provocation to those who regard attribution of beliefs to machines as mere intellectual sloppiness) that the ascription is legitimate.[6]
First consider a simple thermostat that turns off the heat when the temperature is a degree above the temperature set on the thermostat, turns on the heat when the temperature is a degree below the desired temperature, and leaves the heat as is when the temperature is in the two degree range around the desired temperature. The simplest belief predicate B(s,p) ascribes belief to only three sentences: "The room is too cold", "The room is too hot", and "The room is OK" - the beliefs being assigned to states of the thermostat in the obvious way. When the thermostat believes the room is too cold or too hot, it sends a message saying so to the furnace. A slightly more complex belief predicate could also be used in which the thermostat has a belief about what the temperature should be and another belief about what it is. It is not clear which is better, but if we wished to consider possible errors in the thermometer, then we would ascribe beliefs about what the temperature is. We do not ascribe to it any other beliefs; it has no opinion even about whether the heat is on or off or about the weather or about who won the battle of Waterloo. Moreover, it has no introspective beliefs, i.e. it doesn't believe that it believes the room is too hot.
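For concreteness, here is the simplest belief predicate B(s,p) written out as a program. The two-degree band comes from the description above; encoding the state s as a (room temperature, setting) pair is an assumption made for the example.

    # Sketch of the simplest thermostat belief predicate B(s, p): belief is
    # ascribed to exactly three sentences, and to nothing else.

    def B(s, p):
        temp, setting = s                 # assumed encoding of the state
        if p == "The room is too cold":
            return temp < setting - 1
        if p == "The room is too hot":
            return temp > setting + 1
        if p == "The room is OK":
            return setting - 1 <= temp <= setting + 1
        return False                      # no other beliefs, and no beliefs about beliefs

    print(B((65, 70), "The room is too cold"))   # True: it tells the furnace so
    print(B((70, 70), "The room is OK"))         # True
    print(B((70, 70), "The heat is on"))         # False: it has no such opinion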
The temperature control system in my house may be described as follows: Thermostats upstairs and downstairs tell the central system to turn on or shut off hot water flow to these areas. A central water-temperature thermostat tells the furnace to turn on or off, thus keeping the central hot water reservoir at the right temperature. Recently it was too hot upstairs, and the question arose as to whether the upstairs thermostat mistakenly believed it was too cold upstairs or whether the furnace thermostat mistakenly believed the water was too cold. It turned out that neither mistake was made; the downstairs controller tried to turn off the flow of water but couldn't, because the valve was stuck. The plumber came once and found the trouble, and came again when a replacement valve was ordered. Since the services of plumbers are increasingly expensive, and microcomputers are increasingly cheap, one is led to design a temperature control system that would know a lot more about the thermal state of the house and its own state of health.
In the first place, while the system couldn't turn off the flow of hot water upstairs, there is no reason to ascribe to it the knowledge that it couldn't, and a fortiori it had no ability to communicate this fact or to take it into account in controlling the system. A more advanced system would know whether the actions it attempted succeeded, and it would communicate failures and adapt to them. (We adapted to the failure by turning off the whole system until the whole house cooled off and then letting the two parts warm up together. The present system has the physical capability of doing this even if it hasn't the knowledge or the will.)
2. Self-reproducing intelligent configurations in a cellular automaton world. A cellular automaton system assigns to each vertex in a certain graph a finite automaton. The state of each automaton at time t+1 depends on its state at time t and the states of its neighbors at time t. The most common graph is the array of points (x,y) in the plane with integer co-ordinates x and y. The first use of cellular automata was by von Neumann (196?) who found a 27 state automaton that could be used to construct self-reproducing configurations that were also universal computers. The basic automaton in von Neumann's system had a distinguished state called 0, and a point in state 0 whose four neighbors were also in that state would remain in state 0. The initial configurations considered had all but a finite number of cells in state 0, and, of course, this property would persist although the number of non-zero cells might grow indefinitely with time.
The self-reproducing system used the states of a long strip of non-zero cells as a "tape" containing instructions to a "universal constructor" configuration that would construct a copy of the configuration to be reproduced but with each cell in a passive state that would persist as long as its neighbors were also in passive states. After the construction phase, the tape would be copied to make the tape for the new machine, and then the new system would be set in motion by activating one of its cells. The new system would then move away from its mother and the process would start over. The purpose of the design was to demonstrate that arbitrarily complex configurations could be self-reproducing - the complexity being assured by also requiring that they be universal computers.
Since von Neumann's time, simpler basic cells admitting self-reproducing universal computers have been discovered. The simplest so far is the two state Life automaton of John Conway (196?). The state of a cell at time t+1 is determined by its state at time t and the states of its eight neighbors at time t. Namely, a point whose state is 0 will change to state 1 if exactly three of its neighbors are in state 1. A point whose state is 1 will remain in state 1 if two or three of its neighbors are in state 1. In all other cases the state becomes or remains 0.
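The rule is short enough to state as a program; the following sketch computes one generation on a set of live cells and checks it on the familiar period-2 "blinker" (representing a configuration as a set of coordinate pairs is just one convenient choice).

    # Sketch of one step of the Life rule just stated: a dead cell with exactly
    # three live neighbors becomes live; a live cell with two or three live
    # neighbors stays live; every other cell is (or becomes) dead.
    from collections import Counter

    def life_step(live):
        """live: set of (x, y) cells in state 1; returns the next generation."""
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0))
        return {cell for cell, n in neighbor_counts.items()
                if n == 3 or (n == 2 and cell in live)}

    blinker = {(0, 0), (1, 0), (2, 0)}                 # a period-2 oscillator
    print(life_step(life_step(blinker)) == blinker)    # True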
Conway's initial intent was to model a birth and death process whereby a cell is born (goes into state 1) if it has the right number of living neighbors (namely three) and dies if it is either too lonely (has none or one neighbor in state 1) or is overcrowded (has four or more living neighbors). He also asked whether infinitely growing configurations were possible, and Gosper first proved that there were. Surprisingly, it turned out that self-reproducing universal computers could be built up as Life configurations.
Consider a number of such self-reproducing universal computers operating in the Life plane, and suppose that they have been programmed to study the properties of their world and to communicate among themselves about it, perhaps pursuing various goals co-operatively and competitively. Call these configurations robots. In some respects their intellectual and scientific problems will be like ours, but in one major respect they live in a simpler world than ours seems to be. Namely, the fundamental physics of their world is that of the Life automaton, and there is no obstacle to each robot knowing this physics, and being able to simulate the evolution of a Life configuration given the initial state. Moreover, if the initial state of the robot world is finite it can have been recorded in each robot in the beginning or else recorded on a strip of cells that the robots can read. (The infinite regress of having to describe the description is avoided by a convention that the description is not separately described, but can be read both as a description of the world and as a description of itself.)
Since these robots know the initial state of their world and its laws of motion, they can simulate as much of their world's history as they want, assuming that each can grow into unoccupied space so as to have memory to store the states of the world being simulated. This simulation is necessarily much slower than real time, so they can never catch up with the present - let alone predict the future. This is clear if we imagine the simulation carried out straightforwardly by updating a list of currently active cells in the simulated world according to the Life rule, but it also applies to any clever mathematical method that might predict millions of steps ahead. (Some Life configurations, e.g. static ones or ones containing single gliders or cannon, can have their distant futures predicted with little computing.) Namely, if there were an algorithm for such prediction, a robot could be made that would predict its own future and then disobey the prediction. The detailed proof would be analogous to the proof of unsolvability of the halting problem for Turing machines.
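The diagonal step of that argument can itself be sketched as a program: any purported predictor handed to such a robot is consulted and then contradicted. The two-action world and the sample predictors are invented for illustration.

    # Sketch of the diagonal argument: a robot that consults any purported
    # predictor of its own next action and then does the opposite.

    def contrary_robot(predictor):
        """The action the robot actually takes, given a predictor of that action."""
        return "stay" if predictor() == "move" else "move"

    def refuted(predictor):
        """No predictor can be right about this robot's next action."""
        return predictor() != contrary_robot(predictor)

    print(refuted(lambda: "move"), refuted(lambda: "stay"))   # True True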
Now we come to the point of this long disquisition. Suppose we wish to program a robot to be successful in the Life world in competition or co-operation with the others. Without any idea of how to give a mathematical proof, I will claim that our robot will need programs that ascribe purposes and beliefs to its fellow robots and predict how they will react to its own actions by assuming that they will act in ways that they believe will achieve their goals. Our robot might acquire these mental theories in several ways: First, we might design the universal machine so that they are present in the initial configuration of the world. Second, we might program it to acquire these ideas by induction from its experience and even transmit them to others through an "educational system". Third, it might derive the psychological laws from the fundamental physics of the world and its knowledge of the initial configuration, and finally, it might discover how robots are built from Life cells by doing experimental "biology".
Knowing the Life physics without some information about the initial configuration is insufficient to derive the psychological laws, because robots can be constructed in the Life world in an infinity of ways. This follows from the "folk theorem" that the Life automaton is universal in the sense that any cellular automaton can be constructed by taking sufficiently large squares of Life cells as the basic cell of the other automaton.[7]
Men are in a more difficult intellectual position than Life robots. We don't know the fundamental physics of our world, and we can't even be sure that its fundamental physics is describable in finite terms. Even if we knew the physical laws, they seem to preclude precise knowledge of an initial state and precise calculation of its future both for quantum mechanical reasons and because the continuous functions needed to represent fields seem to involve an infinite amount of information.
One point of the cellular automaton robot example is to make plausible the idea that much of human mental structure is not an accident of evolution or even of the physics of our world, but is required for successful problem solving behavior and must be designed into or evolved by any system that exhibits such behavior.
3. Computer time-sharing systems. These complicated computer programs allocate computer time and other resources among users. They allow each user of the computer to behave as though he had a computer of his own, but also allow them to share files of data and programs and to communicate with each other. They are often used for many years with continual small changes, and the people making the changes and correcting errors are often different from the original authors of the system. A person confronted with the task of correcting a malfunction or making a change in a time-sharing system can conveniently use a mentalistic model of the system.
Thus suppose a user complains that the system will not run his program. Perhaps the system believes that he doesn't want to run, perhaps it persistently believes that he has just run, perhaps it believes that his quota of computer resources is exhausted, or perhaps it believes that his program requires a resource that is unavailable. Testing these hypotheses can often be done with surprisingly little understanding of the internal workings of the program.
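A minimal sketch of such hypothesis testing from the outside; the status information and the tests below are hypothetical, standing in for whatever facts a real time-sharing system lets one observe.

    # Sketch: testing the mentalistic hypotheses above against observable symptoms.

    def diagnose(status):
        hypotheses = [
            ("believes the user doesn't want to run", not status["run_requested"]),
            ("believes the user has just run", status["recently_ran"]),
            ("believes the user's quota is exhausted", status["quota_used"] >= status["quota"]),
            ("believes a needed resource is unavailable", bool(status["missing_resources"])),
        ]
        return [h for h, holds in hypotheses if holds]

    status = {"run_requested": True, "recently_ran": False,
              "quota_used": 100, "quota": 100, "missing_resources": []}
    print(diagnose(status))   # ["believes the user's quota is exhausted"]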
4. Programs designed to reason. Suppose we explicitly design a program to represent information by sentences in a certain language stored in the memory of the computer and decide what to do by making inferences, and doing what it concludes will advance its goals. Naturally, we would hope that our previous second order definition of belief will "approve of" a B(s,p) that ascribes to the program belief in the sentences explicitly built in. We would be somewhat embarrassed if someone were to show that our second order definition approved as well or better of an entirely different set of beliefs.
Such a program was first proposed in (McCarthy 1960), and here is how it might work:
Information about the world is stored in a wide variety of data structures. For example, a visual scene received by a TV camera may be represented by a 512x512x3 array of numbers representing the intensities of three colors at the points of the visual field. At another level, the same scene may be represented by a list of regions, and at a further level there may be a list of physical objects and their parts together with other information about these objects obtained from non-visual sources. Moreover, information about how to solve various kinds of problems may be represented by programs in some programming language.
However, all the above representations are subordinate to a collection of sentences in a suitable first order language that includes set theory. By subordinate, we mean that there are sentences that tell what the data structures represent and what the programs do. New sentences can arise by a variety of processes: inference from sentences already present, by computation from the data structures representing observations, ...
→→→→→ There will be more here about what mental qualities should be programmed. ←←←
␈↓ α∧␈↓α␈↓ ∧j"GLOSSARY" OF MENTAL QUALITIES
In this section we give short "definitions" for machines of a collection of mental qualities. We include a number of terms which give us difficulty, with an indication of what the difficulties seem to be.
1. Actions. We want to distinguish the actions of a being from events that occur in its body and that affect the outside world. For example, we wish to distinguish a random twitch from a purposeful movement. This is not difficult relative to a theory of belief that includes intentions. One's purposeful actions are those that would have been different had one's intentions been different. This requires that the theory of belief have sufficient Cartesian product structure so that the counterfactual conditional "if its intentions had been different" is defined in the theory. As explained in the section on definitions relative to an approximate theory, it is not necessary that the counterfactual be given a meaning in terms of the real world.
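A minimal sketch of this counterfactual test, relative to an invented approximate theory that maps an intention and a state to behavior: an event is a purposeful action if varying the intention alone would change it.

    # Sketch: purposeful action vs. random twitch, relative to an approximate
    # theory `behavior(intention, state)`.  The toy theory below is invented.

    def purposeful(behavior, intention, state, alternative_intentions):
        """True iff some change of intention alone would change the behavior."""
        actual = behavior(intention, state)
        return any(behavior(i, state) != actual for i in alternative_intentions)

    reach = lambda intention, state: "reach" if intention == "grasp cup" else "rest"
    twitch = lambda intention, state: "twitch"     # insensitive to intention
    print(purposeful(reach, "grasp cup", None, ["do nothing"]))   # True
    print(purposeful(twitch, "grasp cup", None, ["do nothing"]))  # False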
2. Introspection and self-knowledge.
We say that a machine introspects when it comes to have beliefs about its own mental state. A simple form of introspection takes place when a program determines whether it has certain information and if not asks for it. Often an operating system will compute a check sum of itself every few minutes to verify that it hasn't been changed by a software or hardware malfunction.
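The check-sum form of introspection can be sketched in a few lines; reading the program's own source file, as below, is a simplification of what an operating system actually does.

    # Sketch: a program that hashes its own code so it can notice (and report)
    # that it has been changed -- a very simple belief about its own state.
    import hashlib

    def self_checksum(path=__file__):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    baseline = self_checksum()
    # ... later, e.g. every few minutes in a real system ...
    if self_checksum() != baseline:
        print("warning: I have been changed by a malfunction")
    else:
        print("still the program I believe myself to be")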
In principle, introspection is easier for computer programs than for people, because the entire memory in which programs and data are stored is available for inspection. In fact, a computer program can be made to predict how it would react to particular inputs provided it has enough free storage to perform the calculation. This situation smells of paradox, and there is one. Namely, if a program could predict its own actions in less time than it takes to carry out the action, it could refuse to do what it has predicted for itself. This only shows that self-simulation is necessarily a slow process, and this is not surprising.
However, present programs do little interesting introspection. This is just a matter of the undeveloped state of artificial intelligence; programmers don't yet know how to make a computer program look at itself in a useful way.
3. Consciousness and self-consciousness. Suppose we wish to distinguish the self-awareness of a machine, animal or person from its awareness of other things. We explicate awareness as belief in certain sentences, so in this case we want to distinguish those sentences or those terms in the sentences that may be considered to be about the self. We also don't expect that self-consciousness will be a single property that something either has or hasn't, but rather there will be many kinds of self-awareness, with humans possessing many of the kinds we can imagine.
Here are some of the kinds of self-awareness:
3.1. Certain predicates of the situation (propositional fluents in the terminology of (McCarthy and Hayes 1970)) are directly observable in almost all situations while others often must be inferred. The almost always observable fluents may reasonably be identified with the senses. Likewise the values of certain fluents are almost always under the control of the being and can be called motor parameters for lack of a common language term. We have in mind the positions of the joints. Most motor parameters are both observable and controllable. I am inclined to regard the possession of a substantial set of such constantly observable or controllable fluents as the most primitive form of self-consciousness, but I have no strong arguments against someone who wished to require more.
3.2. The second level of self-consciousness requires a term I in the language denoting the self. I should belong to the class of persistent objects and some of the same predicates should be applicable to it as are applicable to other objects. For example, like other objects I has a location that can change in time. I is also visible and impenetrable like other objects. However, we don't want to get carried away in regarding a physical body as a necessary condition for self-consciousness. Imagine a distributed computer whose sense and motor organs could also be in a variety of places. We don't want to exclude it from self-consciousness by definition.
3.3. The third level comes when I is regarded as an actor among others. The conditions that permit I to do something are similar to the conditions that permit other actors to do similar things.
3.4. The fourth level requires the applicability of predicates such as believes, wants and can to I. Beliefs about past situations and the ability to hypothesize future situations are also required for this level.
4. Language and thought. Here is a hypothesis arising from artificial intelligence concerning the relation between language and thought. Imagine a person or machine that represents information internally in a huge network. Each node of the network has references to other nodes through relations. (If the system has a variable collection of relations, then the relations have to be represented by nodes, and we get a symmetrical theory if we suppose that each node is connected to a set of pairs of other nodes). We can imagine this structure to have a long term part and also extremely temporary parts representing current thoughts. Naturally, each being has its own network depending on its own experience. A thought is then a temporary node currently being referenced by the mechanism of consciousness. Its meaning is determined by its references to other nodes which in turn refer to yet other nodes. Now consider the problem of communicating a thought to another being.
Its full communication would involve transmitting the entire network that can be reached from the given node, and this would ordinarily constitute the entire experience of the being. More than that, it would be necessary to also communicate the programs that take action on the basis of encountering certain nodes. Even if all this could be transmitted, the recipient would still have to find equivalents for the information in terms of its own network. Therefore, thoughts have to be translated into a public language before they can be communicated.
A language is also a network of associations and programs. However, certain of the nodes in this network (more accurately a family of networks, since no two people speak precisely the same language) are associated with words or set phrases. Sometimes the translation from thoughts to sentences is easy, because large parts of the private networks are taken from the public network, and there is an advantage in preserving the correspondence. However, the translation is always approximate (in a sense that still lacks a technical definition), and some areas of experience are
difficult to translate at all. Sometimes this is for intrinsic reasons, and sometimes because particular cultures don't use language in this area. (It is my impression that cultures differ in the extent to which information about facial appearance that can be used for recognition is verbally transmitted). According to this scheme, the "deep structure" of a publicly expressible thought is a node in the public network. It is translated into the deep structure of a sentence as a tree whose terminal nodes are the nodes to which words or set phrases are attached. This "deep structure" then must be translated into a string in a spoken or written language.
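A toy version of the last translation step may make the scheme clearer. In this sketch the particular tree, the word attachments and the left-to-right flattening rule are all assumptions for illustration; the text only requires that terminal nodes be nodes with words or set phrases attached.

    # Toy deep structure: internal nodes are lists, terminal nodes are
    # (word, public_network_node) pairs, i.e. nodes with words attached.
    deep_structure = [
        [("the", 101), ("dog", 57)],
        [("is", 12), ("barking", 203)],
    ]

    def to_string(tree):
        """Flatten a deep structure into a string of a written language."""
        if isinstance(tree, tuple):      # a word-bearing terminal node
            word, _node = tree
            return word
        return " ".join(to_string(subtree) for subtree in tree)

    print(to_string(deep_structure))     # -> "the dog is barking"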
The need to use language to express thought also applies when we have to ascribe thoughts to other beings, since we cannot put the entire network into a single sentence.
5. Intentions.

We may say that a machine intends to perform an action when it believes that it will perform the action and it believes that the action will further a goal. However, further analysis may show that no such first order definition in terms of belief adequately describes intentions. In this case, we can try a second order definition based on an axiomatization of a predicate I(a,s) meaning that the machine intends the action a when it is in state s.
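A small sketch of the first order definition may be useful, with the caveat the paragraph itself gives: the definition may prove inadequate. The belief and goal predicates below are stand-ins invented for the example, not part of any proposed formalism.

    # First order sketch: the machine intends an action in a state if it
    # believes it will perform the action and believes the action will
    # further some goal. If this proves inadequate, the text suggests a
    # second order definition axiomatizing a predicate I(a,s) instead.

    class Machine:
        def __init__(self, beliefs_by_state, goals_by_state):
            self._beliefs = beliefs_by_state    # state -> set of propositions
            self._goals = goals_by_state        # state -> list of goals

        def believes(self, proposition, state):
            return proposition in self._beliefs.get(state, set())

        def goals(self, state):
            return self._goals.get(state, [])

    def intends(machine, action, state):
        return (machine.believes(("will-perform", action), state) and
                any(machine.believes(("furthers", action, goal), state)
                    for goal in machine.goals(state)))

    m = Machine(
        beliefs_by_state={"s0": {("will-perform", "open-valve"),
                                 ("furthers", "open-valve", "fill-tank")}},
        goals_by_state={"s0": ["fill-tank"]},
    )
    assert intends(m, "open-valve", "s0")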
6. Free will

When we program a computer to make choices intelligently after determining its options, examining their consequences, and deciding which is most favorable or most moral or whatever, we must program it to take an attitude towards its freedom of choice essentially isomorphic to that which a human must take to his own.
We can define whether a particular action was free or forced relative to a theory that ascribes beliefs and within which beings do what they believe will advance their goals. In such a theory, action is precipitated by a belief of the form "I should do X now". We will say that the action was free if changing the belief to "I shouldn't do X now" would have resulted in the action not being performed. This requires that the theory of belief have sufficient Cartesian product structure so that changing a single belief is defined, but it doesn't require defining what the state of the world would be if a single belief were different.
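The counterfactual in this definition can be shown in a few lines of Python. Representing the belief state as a dictionary of independently settable beliefs supplies the Cartesian product structure the text asks for; the decide rule, and the "reflex" used to contrast a forced action, are assumptions made only for the example.

    # An action was free if changing the single belief "I should do X now"
    # to its negation would have resulted in the action not being performed.

    def decide(beliefs, action):
        """Action is precipitated by the belief 'I should do X now'; the
        'reflex' entry is an invented mechanism used to show a forced case."""
        return beliefs.get(("reflex", action), False) or \
               beliefs.get(("should-do-now", action), False)

    def was_free(beliefs, action):
        flipped = dict(beliefs)   # Cartesian product structure: flip one belief
        flipped[("should-do-now", action)] = \
            not beliefs.get(("should-do-now", action), False)
        return decide(beliefs, action) != decide(flipped, action)

    deliberate = {("should-do-now", "open-valve"): True}
    forced = {("should-do-now", "blink"): True, ("reflex", "blink"): True}
    assert was_free(deliberate, "open-valve")   # flipping the belief changes the outcome
    assert not was_free(forced, "blink")        # the reflex acts regardless of the belief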
This isn't the whole free will story, because moralists are also concerned with whether praise or blame may be attributed to a choice. The following considerations would seem to apply to any attempt to define the morality of actions in a way that would apply to machines:
6.1. There is unlikely to be a simple behavioral definition. Instead there would be a second order definition criticizing predicates that ascribe morality to actions.
6.2. The theory must contain at least one axiom of morality that is not just a statement of physical fact. Relative to this axiom, moral judgments of actions can be factual.
6.3. The theory of morality will presuppose a theory of belief in which statements of the form "It believed the action would harm someone" are defined. The theory must ascribe beliefs about others' welfare and perhaps about the being's own welfare.
6.4. It might be necessary to consider the machine as imbedded in some kind of society in order to ascribe morality to its actions.
6.5. No present machines admit such a belief structure, and no such structure may be required to make a machine with arbitrarily high intelligence in the sense of problem-solving ability.
6.6. It seems unlikely that morally judgable machines or machines to which rights might legitimately be ascribed should be made if and when it becomes possible to do so.
→→→→→→More mental qualities will be discussed.←←←←←←←←←
OTHER VIEWS ABOUT MIND
→→→→→This section will be written←←←←←
NOTES
1. Work in artificial intelligence is still far from showing how to reach human-level intellectual performance. Our approach to the AI problem involves identifying the intellectual mechanisms required for problem solving and describing them precisely. Therefore we are at the end of the philosophical spectrum that requires everything to be formalized in mathematical logic. It is sometimes said that one studies philosophy in order to advance beyond one's untutored naive world-view, but unfortunately for artificial intelligence, no-one has yet been able to give a description of even a naive world-view, complete and precise enough to allow a knowledge-seeking program to be constructed in accordance with its tenets.
2. Present AI programs operate in limited domains, e.g. play particular games, prove theorems in a particular logical system, or understand natural language sentences covering a particular subject matter and with other semantic restrictions. General intelligence will require general models of situations changing in time, actors with goals and strategies for achieving them, and knowledge about how information can be obtained.
3. This kind of teleological analysis is often useful in understanding natural organisms as well as machines. Here evolution takes the place of design and we often understand the function performed by an organ before we understand its detailed physiology. Teleological analysis is applicable to psychological and social phenomena in so far as these are designed or have been subject to selection. However, teleological analysis fails when applied to aspects of nature which have neither been designed nor produced by natural selection from a population. Much medieval science was based on the Judeo-Christian-Moslem hypothesis that the details of the world were designed by God for the benefit of man. The strong form of this hypothesis was abandoned at the time of Galileo and Newton but occasionally recurs. Barry Commoner's (1972) axiom of ecology "Nature knows best" seems to be mistakenly based on the notion that nature as a whole is the result of an evolutionary process that selected the "best nature".
4. Novelty is not absolutely guaranteed.
5. Behavioral definitions are often favored in philosophy. A system is defined to have a certain quality if it behaves in a certain way or is disposed to behave in a certain way. Their ostensible virtue is conservatism; they don't postulate internal states that are unobservable to present science and may remain unobservable. However, such definitions are awkward for mental qualities, because, as common sense suggests, a mental quality may not result in behavior, because another mental quality may prevent it; e.g. I may think you are thick-headed, but politeness may prevent my saying so. Particular difficulties can be overcome, but an impression of vagueness remains. The liking for behavioral definitions stems from caution, but I would interpret scientific experience as showing that boldness in postulating complex structures of unobserved entities - provided it is accompanied by a willingness to take back mistakes - is more likely to be rewarded by understanding of and control over nature than is positivistic timidity. It is particularly instructive to imagine a determined behaviorist trying to figure out an electronic computer. Trying to define each quality behaviorally would get him nowhere; only simultaneously postulating a complex structure including memory, arithmetic unit, control structure, and input-output would yield predictions that could be compared with experiment. There is a sense in which operational definitions are not taken seriously even by their proposers. Suppose someone
gives an operational definition of length (e.g. involving a certain platinum bar), and a whole school of physicists and philosophers becomes quite attached to it. A few years later, someone else criticizes the definition as lacking some desirable property, proposes a change, and the change is accepted. This is normal, but if the original definition expressed what they really meant by the length, they would refuse to change, arguing that the new concept may have its uses, but it isn't what they mean by "length". This shows that the concept of "length" as a property of objects is more stable than any operational definition. Carnap has an interesting section in Meaning and Necessity entitled "The Concept of Intension for a Robot" in which he makes a similar point saying, "It is clear that the method of structural analysis, if applicable, is more powerful than the behavioristic method, because it can supply a general answer, and, under favorable circumstances, even a complete answer to the question of the intension of a given predicate."
6. Whether a system has beliefs and other mental qualities is not primarily a matter of complexity of the system. Although cars are more complex than thermostats, it is hard to ascribe beliefs or goals to them, and the same is perhaps true of the basic hardware of a computer, i.e. the part of the computer that executes the program without the program itself.
7. Our own ability to derive the laws of higher levels of organization from knowledge of lower level laws is also limited by universality. While the presently accepted laws of physics allow only one chemistry, the laws of physics and chemistry allow many biologies, and, because the neuron is a universal computing element, an arbitrary mental structure is allowed by basic neurophysiology. Therefore, to determine human mental structure, one must make psychological experiments, or determine the actual anatomical structure of the brain and the information stored in it. One cannot determine the structure of the brain merely from the fact that the brain is capable of certain problem solving performance. In this respect, our position is similar to that of the Life robot.
8. Philosophy and artificial intelligence. These fields overlap in the following way: In order to make a computer program behave intelligently, its designer must build into it a view of the world in general, apart from what he includes about particular sciences. (The skeptic who doubts whether there is anything to say about the world apart from the particular sciences should try to write a computer program that can figure out how to get to Timbuktoo, taking into account not only the facts about travel in general but also facts about what people and documents have what information, and what information will be required at different stages of the trip and when and how it is to be obtained. He will rapidly discover that he is lacking a science of common sense, i.e. he will be unable to formally express and build into his program "what everybody knows". Maybe philosophy could be defined as an attempted science of common sense, or else the science of common sense should be a definite part of philosophy.)
Artificial intelligence has another component which philosophers have not studied, namely heuristics. Heuristics is concerned with: given the facts and a goal, how a program should investigate the possibilities and decide what to do. On the other hand, artificial intelligence is not much concerned with aesthetics and ethics.
Not all approaches to philosophy lead to results relevant to the artificial intelligence problem. On the face of it, a philosophy that entailed the view that artificial intelligence was impossible would be unhelpful, but besides that, taking artificial intelligence seriously suggests
some philosophical points of view. I am not sure that all I shall list are required for pursuing the AI goal - some of them may be just my prejudices - but here they are:
8.1. The relation between a world view and the world should be studied by methods akin to metamathematics in which systems are studied from the outside. In metamathematics we study the relation between a mathematical system and its models. Philosophy (or perhaps metaphilosophy) should study the relation between world structures and systems within them that seek knowledge. Just as the metamathematician can use any mathematical methods in this study and distinguishes the methods he uses from those being studied, so the philosopher should use all his scientific knowledge in studying philosophical systems from the outside.
Thus the question "How do I know?" is best answered by studying "How does it know?", getting the best answer that the current state of science and philosophy permits, and then seeing how this answer stands up to doubts about one's own sources of knowledge.
8.2. We regard metaphysics as the study of the general structure of the world and epistemology as studying what knowledge of the world can be had by an intelligence with given opportunities to observe and experiment. We need to distinguish what can be determined about the structure of humans and machines by scientific research over a period of time and experimenting with many individuals from what can be learned in a particular situation with particular opportunities to observe. From the AI point of view, the latter is as important as the former, and we suppose that philosophers would also consider it part of epistemology. The possibilities of reductionism are also different for theoretical and everyday epistemology. We could imagine that the rules of everyday epistemology could be deduced from a knowledge of physics and the structure of the being and the world, but we can't see how one could avoid using mental concepts in expressing knowledge actually obtained by the senses.
8.3. It is now accepted that the basic concepts of physical theories are far removed from observation. The human sense organs are many levels of organization removed from quantum mechanical states, and we have learned to accept the complication this causes in verifying physical theories. Experience in trying to make intelligent computer programs suggests that the basic concepts of the common sense world are also complex and not always directly accessible to observation. In particular, the common sense world is not a construct from sense data, but sense data play an important role. When a man or a computer program sees a dog, we will need both the relation between the observer and the dog and the relation between the observer and the brown patch in order to construct a good theory of the event.
8.4. In spirit this paper is materialist, but it is logically compatible with some other philosophies. Thus cellular automaton models of the physical world may be supplemented by supposing that certain complex configurations interact with additional automata called souls that also interact with each other. Such interactionist dualism won't meet emotional or spiritual objections to materialism, but it does provide a logical niche for any empirically argued belief in telepathy, communication with the dead and other psychic phenomena. A person who believed the alleged evidence for such phenomena and still wanted a scientific explanation could model his beliefs with auxiliary automata.
REFERENCES
Carnap, Rudolf (1956), Meaning and Necessity, University of Chicago Press.

McCarthy, J. and Hayes, P.J. (1969), "Some Philosophical Problems from the Standpoint of Artificial Intelligence", Machine Intelligence 4, pp. 463-502 (eds. Meltzer, B. and Michie, D.), Edinburgh: Edinburgh University Press.
→→→→→→→→More references will be supplied←←←←←←←←
John McCarthy
Artificial Intelligence Laboratory
Stanford University
Stanford, California 94305